This report presents an initial analysis of pro-social touch behaviors in the NWSL 2024 season, exploring how patterns of touch between teammates may be associated with team performance and match outcomes. Using a dataset of ~14,000 touch instances (filtered to ~7,000 pro-social touches), I tested four primary hypotheses:
Overall Touch & Team Success
Hypothesis: Teams with higher frequencies of pro-social touch
across the season would achieve better final standings.
Finding: No strong association was observed between total
pro-social touch frequency over the season and final standings (counter
to prior findings in basketball). Further work could explore alternate
modeling approaches (e.g., controlling for goal celebrations).
Match-Level Touch Variation & Outcomes
Hypothesis: Matches where teams exceeded their typical touch
behavior would be associated with better match outcomes.
Finding: A significant positive correlation was observed —
matches where teams displayed higher-than-usual prosocial touch behavior
were associated with more favorable goal differentials.
Underdog Hypothesis
Hypothesis: Higher levels of prosocial touch would be
particularly associated with better outcomes for underdog teams facing
stronger opponents.
Finding: Both observed data and a GAM model indicated that, for
underdog teams, higher levels of prosocial touch were associated with
more favorable match outcomes — suggesting that prosocial touch may be a
useful behavioral marker of cohesion or resilience in challenging
contexts.
Social Network & Reciprocity
Hypothesis: Teams with a higher ratio of reciprocal
(vs. non-reciprocal) touches, and more evenly distributed touch
behaviors across players, would finish higher in the standings.
Finding: A higher reciprocal-to-non-reciprocal touch ratio was
significantly associated with better standings. Measures of overall
touch distribution across players (CV, Gini) were not significantly
associated with final standings.
Inter-Rater Reliability:
An Intraclass Correlation Coefficient (ICC = 0.79) indicates good
agreement between coders, providing confidence in the reliability of the
dataset.
Overall Takeaway:
While total season-long touch frequency alone was not associated with
team success, the patterns and contexts of prosocial touch —
particularly in high-pressure or underdog situations — appear
meaningfully associated with match outcomes. These early results suggest
that prosocial touch behavior may serve as a useful marker of team
cohesion and resilience in professional sport, warranting further
study.
Hi team (Anne and Annett)! This is my first true stab at the data. I addressed 4 different hypotheses, each with their own section. The first hypothesis does not seem to have a ‘positive’ outcome; however, the last three show some promise. With full intent to bias you, the underdog hypothesis (number 3) is my favorite.
I will step through each of the hypotheses in order (as seen in the executive summary above), with their corresponding R-code, outputs, and a quick discussion.
At the end, I have a quick mention of the Inter-Rater analysis. I had hoped to find a way to automate an event-by-event comparison, however the task seems to be beating me. I am currently only doing ICC for sheer frequency of touches across each inter-rater match.
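For reference, a minimal sketch of how that frequency-level ICC can be computed with the `irr` package. The `RaterCounts` data frame and its column names are hypothetical stand-ins for whatever per-match count table is actually used:

library(irr)
# Hypothetical input: one row per double-coded match, one column per rater's touch count
icc_input <- RaterCounts %>%
select(Rater1Count, Rater2Count)
icc_result <- icc(
icc_input,
model = "twoway", # both raters coded every match
type = "agreement", # absolute agreement in counts, not just consistency
unit = "single" # reliability of a single rater's counts
)
icc_result$value # the ICC reported in this document (0.79)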
Additionally, here is a link to the GitHub repository containing my code and the raw data spreadsheets if you would like to take a look. I may have to give you permission to access them, apologies for the extra step: (https://github.com/paigeliebel/Soccer_Touch.git)
For readability, the code required for cleaning, merging, and fixing typos in the data frames was excluded from this report. You can see the code on GitHub.
Core Hypothesis: Pro-social touches between teammates may serve as an indicator for overall team cohesion. Therefore, we propose that teams with a greater frequency of these interactions will secure higher positions in the season’s final standings.
Definitions:
Pro-social touches are defined as all haptic rituals recorded excluding: Tactical Adjustments, Collisions, and Negative Touch
Situations excluded from analysis for this hypothesis: Goals For/Against, Substitutions
library(tidyverse)
library(data.table)
library(broom)
library(janitor)
library(readxl)
library(rmarkdown)
library(readr)
library(dplyr)
Exclude_Touch <- c("TA", "CO", "NEG")
Exclude_Situation <- c("GF", "GA", "SUB")
#Exclude_Visibility <- c("P")
#Creates data set for core hypothesis analysis
Touches_CoreHyp <- Touches_final %>%
filter(!(HapticRitual %in% Exclude_Touch)) %>%
filter(!(Situation %in% Exclude_Situation)) #%>%
#filter(!(Visibility %in% Exclude_Visibility))
#Count of frequency of touches per team
Touches_by_team <- Touches_CoreHyp %>%
mutate(Team = str_trim(as.character(Team))) %>%
count(Team, name = "TotalTouches")
# Make sure TeamID is padded to match
FinalStandings <- FinalStandings %>%
mutate(TeamID = str_pad(as.character(TeamID), width = 2, pad = "0"))
#Join touch counts with final standings
Team_Touches_Standings <- FinalStandings %>%
left_join(Touches_by_team, by = c("TeamID" = "Team")) %>%
filter(!is.na(TotalTouches))
#Plot with regression line : final rankings to frequency of touch
TouchFreq_vs_FinalStandings <- ggplot(Team_Touches_Standings, aes(x = Rank, y = TotalTouches)) +
geom_point(size = 3) +
geom_smooth(method = "lm", se = FALSE, color = "blue", linewidth = 1) +
scale_x_reverse() +
labs(
title = "Final Rank vs Overall Touch Frequency",
x = "Final Season Rank",
y = "Total Touches (Filtered)"
) +
theme_minimal()
TouchFreq_vs_FinalStandings_Stats <- cor.test(Team_Touches_Standings$TotalTouches, Team_Touches_Standings$Rank)
##
## Pearson's product-moment correlation
##
## data: Team_Touches_Standings$TotalTouches and Team_Touches_Standings$Rank
## t = -0.072115, df = 12, p-value = 0.9437
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.5453701 0.5154585
## sample estimates:
## cor
## -0.02081319
As seen in the results above, there does NOT appear to be a strong correlation between total frequency of pro-social touch and overall end-of-season success. This runs counter to the Kraus basketball paper, which concluded that more touch leads to more success.
However, I am interested in looking at other factors that could link touch to end-of-season success. The fact that I excluded goal celebrations is vital. Could we include Goal Celebration touches but control for the number of goals scored?
Perhaps, I can convince one of the masters students to dive into these other hypotheses.
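As a sketch of what that could look like: refit the season-level association as a multiple regression with goals scored as a covariate. The `TotalTouchesInclGF` (touch counts recomputed without the GF exclusion) and `GoalsScored` columns are hypothetical, not yet in the data frames above:

# Sketch only: keep Goal Celebration touches, then control for goals scored
model_controlled <- lm(
Rank ~ TotalTouchesInclGF + GoalsScored,
data = Team_Touches_Standings
)
summary(model_controlled) # does touch still relate to rank once goals are held constant?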
Hypothesis 1.a: Teams that show less variability across matches throughout a season with regards to their touch will secure higher positions in the season’s final standings.
For this sub-hypothesis, the touches were scaled for each team based on the MAD (median absolute deviation). Scaling per team was needed because some teams are ‘touchier’ than others.
########## Within-Team Variability in Touch Frequency ##########
# Looks at the variability a team has across matches throughout the season
# Count touches per team per game from CoreHyp data frame
Touches_per_game <- Touches_CoreHyp %>%
group_by(Team, SeasonMatchNumber) %>%
summarise(TouchCount = n(), .groups = "drop")
# Computing Within-Team Variability
Team_touch_variability <- Touches_per_game %>%
group_by(Team) %>%
summarise(
MeanTouches = mean(TouchCount),
SDTouches = sd(TouchCount),
MinTouches = min(TouchCount),
MaxTouches = max(TouchCount),
NumGames = n(),
.groups = "drop"
)
# Join season rank to each team for ordering in the plot
Touches_per_game_ranked <- Touches_per_game %>%
mutate(Team = str_pad(as.character(Team), width = 2, pad = "0")) %>%
left_join(FinalStandings %>% select(TeamID, Rank), by = c("Team" = "TeamID")) %>%
filter(!is.na(Rank)) # make sure we only include ranked teams
# Visualize
TouchesPerGame_vs_rank <- ggplot(Touches_per_game_ranked, aes(x = Rank, y = TouchCount, group = Rank)) +
geom_boxplot(fill = "lightblue", color = "black") +
scale_x_reverse(breaks = 1:14) + # clean 1–14 axis
labs(
title = "Variation of Within-Team Touch Frequency per Game",
x = "Team (Ordered by Final Rank)",
y = "Touches per Game"
) +
theme_minimal()
# Scale touch based on distance from mean (MAD-based z-score)
# How extreme a touch count is compared to team's norm
Touches_scaled <- Touches_per_game %>%
group_by(Team) %>%
mutate(
MedianTouch = median(TouchCount),
MAD = mad(TouchCount), # median absolute deviation
ScaledTouch = (TouchCount - MedianTouch) / MAD
) %>%
ungroup()
# Join Touches_scaled with ranks
Touches_scaled_ranked <- Touches_scaled %>%
mutate(Team = str_pad(as.character(Team), width = 2, pad = "0")) %>%
left_join(FinalStandings %>% select(TeamID, Rank), by = c("Team" = "TeamID")) %>%
filter(!is.na(Rank))
# Plot of MAD
MAD_TouchesPerGame_vs_rank <- ggplot(Touches_scaled_ranked, aes(x = Rank, y = ScaledTouch, group = Rank)) +
geom_boxplot(fill = "lightblue", color = "black") +
geom_hline(yintercept = c(-2, 2), linetype = "dashed", color = "red") +
scale_x_reverse(breaks = 1:14) + # clean 1–14 axis
labs(
title = "Scaled Touch Deviation from Team Median",
subtitle = "Boxplot of (TouchCount - Median) / MAD per Team",
x = "Team (Ordered by Final Rank)",
y = "Scaled Touch Value (MAD Units)"
) +
theme_minimal()
########## Within-Team Variability in Touch Frequency vs Ranking ##########
# Join variability data to final standings
Variability_vs_Rank <- Team_touch_variability %>%
mutate(Team = str_pad(as.character(Team), width = 2, pad = "0")) %>%
left_join(FinalStandings %>% select(TeamID, Rank), by = c("Team" = "TeamID")) %>%
filter(!is.na(Rank))
# Plot SDTouches vs Rank
Within_Variability_vs_Rank <- ggplot(Variability_vs_Rank, aes(x = Rank, y = SDTouches)) +
geom_point(size = 3) +
geom_smooth(method = "lm", se = FALSE, color = "blue", linewidth = 1) +
scale_x_reverse(breaks = 1:14) +
labs(
title = "Team Variability in Touch vs Final Rank",
x = "Final Season Rank",
y = "Touch Frequency Variability (SD)"
) +
theme_minimal()
Within_Variability_vs_Rank_Stats <- cor.test(Variability_vs_Rank$SDTouches, Variability_vs_Rank$Rank)
#In regards to within-team variability including MAD scaling
Team_scaled_variability <- Touches_scaled %>%
group_by(Team) %>%
summarise(
SD_ScaledTouch = sd(ScaledTouch),
NumGames = n(),
.groups = "drop"
)
Team_scaled_variability_ranked <- Team_scaled_variability %>%
mutate(Team = str_pad(as.character(Team), width = 2, pad = "0")) %>%
left_join(FinalStandings %>% select(TeamID, Rank), by = c("Team" = "TeamID")) %>%
filter(!is.na(Rank))
Team_scaled_variability_ranked_plot <- ggplot(Team_scaled_variability_ranked, aes(x = Rank, y = SD_ScaledTouch)) +
geom_point(size = 3) +
geom_smooth(method = "lm", se = FALSE, color = "red", linewidth = 1) +
scale_x_reverse(breaks = 1:14) +
labs(
title = "Team Variability (SD of Scaled Touch) vs Final Rank",
x = "Final Season Rank",
y = "SD of Scaled Touch (MAD units)"
) +
theme_minimal()
ScaledTouch_Variability_vs_Rank_Stats <- cor.test(
Team_scaled_variability_ranked$SD_ScaledTouch,
Team_scaled_variability_ranked$Rank
)
The above graph shows a breakdown of within-team touch variation per game. Each box is one team, with the most successful team (rank 1) on the far right. Each team played 26 games throughout the season, so each box is built from 26 data points.
The above graph is similar to the previous one, but with the touches scaled.
MAD was used instead of SD due to the non-normal nature of the data.
The following formulas were used:
Median Absolute Deviation (MAD) \[ \text{MAD} = \text{median} \left( \left| X_i - \text{median}(X) \right| \right) \] Scaled Touch \[ \text{ScaledTouch}_i = \frac{ X_i - \text{median}(X) }{ \text{MAD} } \]
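One caveat worth flagging: R's `mad()` multiplies the raw MAD by 1.4826 by default (a scale factor that makes it consistent with the SD under normality), so the `ScaledTouch` values computed with `mad(TouchCount)` are in normal-consistent MAD units rather than the raw MAD units of the formula above. A toy example:

x <- c(30, 34, 35, 36, 40, 52) # toy per-game touch counts
mad(x, constant = 1) # raw MAD from the formula above: 3
mad(x) # default: 1.4826 * 3 = 4.4478
# The factor is identical for every team, so Pearson correlations involving
# ScaledTouch are unaffected; only the units of ScaledTouch change.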
Then I looked at within-team variation across teams. In other words, does consistency of touch frequency correlate with end-of-season standings? I looked at just the standard deviation of overall touch, not scaled:
##
## Pearson's product-moment correlation
##
## data: Variability_vs_Rank$SDTouches and Variability_vs_Rank$Rank
## t = 0.96201, df = 12, p-value = 0.355
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.3065156 0.6989310
## sample estimates:
## cor
## 0.2675808
Then I took a look at the scaled touch version:
Team_scaled_variability_ranked_plot
## `geom_smooth()` using formula = 'y ~ x'
ScaledTouch_Variability_vs_Rank_Stats
##
## Pearson's product-moment correlation
##
## data: Team_scaled_variability_ranked$SD_ScaledTouch and Team_scaled_variability_ranked$Rank
## t = 1.5994, df = 12, p-value = 0.1357
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1432663 0.7769560
## sample estimates:
## cor
## 0.4191765
Neither proved to be statistically significant.
With the scaled version, there is a moderate positive correlation: a higher SD of scaled touch is somewhat associated with a worse (numerically higher) final rank. In other words, there is a tentative relationship between greater consistency of touch frequency and better end-of-season ranking, though it did not reach significance.
The frequency of pro-social touch across teams may vary due to confounding variables such as team culture or tactics. Therefore, instead of analyzing team to team variations, we suggest that the inter-match variability of pro-social touches within an individual team could be used as an indicator for individual match outcomes. We propose that teams with more pro-social touches within a match than their respective season average are more likely to have a higher goal differential.
Statistically Significant Results. Yay.
Note: Whereas hypothesis 1 looked at outcomes for the whole season, hypothesis 2 is looking at a match level outcome.
For each team, we calculated how many touches they made in a match above or below their own season average (in raw touch counts). We then tested whether matches where a team touched more than usual were associated with better goal differential, using a Pearson correlation and a Wilcoxon rank-sum test. I ran the Wilcoxon test in case the data were not nicely distributed.
To further test whether match outcomes differed between matches with above- or below-average team touch behavior, we conducted a Wilcoxon rank-sum test (non-parametric, due to non-normal distribution of goal differentials). The test compared goal differential in matches where teams touched more than their season average versus those with below-average touch counts.
Scaling of touch took a simpler approach here, given that we are looking at individual matches: I computed how ‘touchy’ each match was compared to the team’s season average:
\[ \text{ScaledAboveAvg} = \text{TouchCount}_{\text{match}} - \text{SeasonAvgTouch}_{\text{team}} \]
# Inter-Match Variability Hypothesis
# For more information on these data frames please look at the README.md file
library(tidyverse)
library(data.table)
library(broom)
library(janitor)
library(readxl)
library(rmarkdown)
library(readr)
library(dplyr)
#Ensure to use the correct dfs. Touches_final and Matches_final are correct. They only include assigned rater data, no repeat matches
#Check to make sure data frames are loaded:
if (!exists("Touches_final") | !exists("Matches_final") | !exists("FinalStandings") | !exists("Touches_CoreHyp")) {
stop("Touches_final, Matches_final, Touches_CoreHyp or FinalStandings not loaded. Check Data_Management.R and Core_Hypothesis.R.")
}
############## Inter-Match Variability Hypothesis ##############
# Basically asking: "When a team is more (or less) touchy than usual, do they score more or less goals than their opponent?"
# Note that this is still using same CoreHyp touches (therefore only prosocial touches and not including GF, GA, Subs etc)
#Clean Match column data for use
Matches_final_cleaned <- Matches_final %>%
mutate(
GoalsFor = as.numeric(str_trim(GoalsFor)),
GoalsAgainst = as.numeric(str_trim(GoalsAgainst))
)
#Get TeamID into Matches_final
Matches_finalID <- Matches_final_cleaned %>%
mutate(
MatchID = str_pad(MatchID, width = 4, pad = "0"), # in case it was shortened
TeamID = str_sub(MatchID, 1, 2), # preserve leading zeros
GoalDiff = GoalsFor - GoalsAgainst
)
# Prosocial touches per team per match
Touches_per_match <- Touches_CoreHyp %>%
group_by(Team, SeasonMatchNumber) %>%
summarise(TouchCount = n(), .groups = "drop")
# Average touch per team over the season
Team_season_avg <- Touches_per_match %>%
group_by(Team) %>%
summarise(SeasonAvgTouch = mean(TouchCount), .groups = "drop")
# Merge & calculate scaled deviation from average
Touches_scaled_dev <- Touches_per_match %>%
left_join(Team_season_avg, by = "Team") %>%
mutate(ScaledAboveAvg = TouchCount - SeasonAvgTouch) # positive = more touchy than average, neg = less touchy than average
# Join data frames (goal differentials to the scaled touches)
Touch_GoalDiff_Analysis <- Touches_scaled_dev %>%
left_join(
Matches_finalID %>% select(SeasonMatchNumber, TeamID, GoalDiff),
by = c("SeasonMatchNumber", "Team" = "TeamID")
)
# Visualize
Touch_GoalDiff_Analysis_Graph <- ggplot(Touch_GoalDiff_Analysis, aes(x = ScaledAboveAvg, y = GoalDiff)) +
geom_point(size = 2, alpha = 0.7) +
geom_smooth(method = "lm", se = FALSE, color = "blue") +
labs(
title = "Touch Count Deviation vs Match Goal Differential",
x = "Touches Above/Below Team Average",
y = "Goal Differential",
caption = "Note: Each dot represents one match outcome for a team, therefore 2 dots for each match"
) +
theme_minimal()
# Pearson
Touch_GoalDiff_Analysis_Stats <- cor.test(Touch_GoalDiff_Analysis$ScaledAboveAvg, Touch_GoalDiff_Analysis$GoalDiff)
############## Wilcoxon: Inter-Match Variability Hypothesis ##############
# Wilcoxon: Perhaps not nicely distributed: “Do teams tend to have higher goal differentials when they are more touchy than their season average?”
# Create AboveAvgTouch flag
Touch_AboveBelow_Analysis <- Touch_GoalDiff_Analysis %>%
mutate(AboveAvgTouch = ScaledAboveAvg > 0)
# Summary of goal differentials by touch group
Touch_AboveBelow_Analysis %>%
group_by(AboveAvgTouch) %>%
summarise(
n = n(),
mean_GD = mean(GoalDiff, na.rm = TRUE),
median_GD = median(GoalDiff, na.rm = TRUE),
sd_GD = sd(GoalDiff, na.rm = TRUE)
)
## # A tibble: 2 × 5
## AboveAvgTouch n mean_GD median_GD sd_GD
## <lgl> <int> <dbl> <dbl> <dbl>
## 1 FALSE 202 -0.444 -1 1.52
## 2 TRUE 162 0.581 1 1.69
# Wilcoxon rank-sum test
Wilcox_Test <- wilcox.test(GoalDiff ~ AboveAvgTouch, data = Touch_AboveBelow_Analysis)
# Visualize Wilcoxon
Wilcox_Test_Graph <- ggplot(Touch_AboveBelow_Analysis, aes(x = AboveAvgTouch, y = GoalDiff)) +
geom_boxplot(fill = "lightblue") +
scale_x_discrete(labels = c("FALSE" = "Below Avg Touch", "TRUE" = "Above Avg Touch")) +
labs(
title = "Goal Differential by Above/Below Avg Touch",
x = "Above Team's Avg Touch?",
y = "Goal Differential"
) +
theme_minimal()
##
## Pearson's product-moment correlation
##
## data: Touch_GoalDiff_Analysis$ScaledAboveAvg and Touch_GoalDiff_Analysis$GoalDiff
## t = 5.6914, df = 312, p-value = 2.906e-08
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.2028881 0.4036665
## sample estimates:
## cor
## 0.306685
On the graph above: Positive values indicate matches where the team touched more than usual; negative values indicate matches with lower than usual touch counts.
We observed a statistically significant positive correlation between match-level touch deviation and goal differential (r = 0.31, p < 0.001, 95% CI [0.20, 0.40]). In other words, matches where a team touched more than their season average were associated with higher goal differentials.
Wilcox_Test
##
## Wilcoxon rank sum test with continuity correction
##
## data: GoalDiff by AboveAvgTouch
## W = 7714, p-value = 1.949e-08
## alternative hypothesis: true location shift is not equal to 0
Wilcox_Test_Graph
## Warning: Removed 50 rows containing non-finite outside the scale range
## (`stat_boxplot()`).
The Wilcoxon test split the data into two types of matches: matches where the team touched more than its season average and matches where the team touched less than average.
We tested: did these two groups have different goal differentials?
Reasons to use Wilcoxon: it is non-parametric, the grouping is simply above vs. below average (not by team), and goal differential is not normally distributed (lots of ±1s, not too many 5s or 6s).
Results of Wilcoxon Test: A Wilcoxon rank-sum test showed that goal differentials were significantly higher in matches where teams touched more than their season average compared to matches with below-average touch counts (W = 7714, p < 0.001).
Teams with greater frequencies of pro-social touch when competing against higher-ranked teams will secure a better goal differential than teams with fewer touch instances facing an opponent at the same spread in rankings.
Statistically Significant Result! Yay.
Definition of spread: Spread in rankings is defined as the difference in current standings the teams have between each other. Therefore, the 1st ranked team competing against the 7th ranked team has the same spread as the 7th ranked team playing against the 14th ranked team. This could expose a team’s cohesion by how they react when facing adversity.
Higher positive spread = strong team vs weak team
Negative spread = underdog team
In other words → touch might help underdog teams cope with adversity.
For each match, we computed how touchy the team was vs. its typical behavior: ScaledTouch = (TouchCount - Median) / MAD
I used both an ANOVA (which described nothing of interest) and a GAM model (which I love).
Note: The first 13 matches of the season are excluded. Not all teams play the first weekend of the season, so the current standings only give a ‘comprehensive picture’ going into the 3rd weekend. In other words, after match 13, every team has completed at least 1 game.
Additionally, for the categorical analysis I arbitrarily determined that a spread between 0 and 7 was a “mild underdog” or a “mild favorite”, and a spread of 7 or more was considered “major”. This is simply because there are 14 teams in the league and a spread of 7 would place a team in the other half of the table.
SpreadGroup = Major Underdog, Mild Underdog, Even, Mild Favorite, Major Favorite
TouchGroup = High Touch (≥1 MAD), Low Touch (≤-1 MAD), Average Touch
# Underdog Hypothesis
library(tidyverse)
library(data.table)
library(broom)
library(janitor)
library(readxl)
library(rmarkdown)
library(readr)
library(dplyr)
library(plotly)
library(mgcv)
library(ggplot2)
library(forcats)
#Ensure to use the correct dfs. Touches_final and Matches_final are correct. They only include assigned rater data, no repeat matches
#Check to make sure data frames are loaded:
if (!exists("Touches_final") | !exists("Touches_scaled") | !exists("Matches_finalID") | !exists("FinalStandings") | !exists("Touches_CoreHyp")) {
stop("Touches_final, Touches_scaled, Matches_finalID, Touches_CoreHyp or FinalStandings not loaded. Check Data_Management.R and Core_Hypothesis.R.")
}
############################ Underdog Hypothesis ############################
# Clean Match column data for use
Matches_final_cleaned_CurrentStandings <- Matches_final %>%
mutate(
GoalsFor = as.numeric(str_trim(GoalsFor)),
GoalsAgainst = as.numeric(str_trim(GoalsAgainst)),
CurrentStanding = as.numeric(str_trim(CurrentStanding)),
MatchID = str_pad(MatchID, width = 4, pad = "0"), # ensure 4-digit MatchID
TeamID = str_sub(MatchID, 1, 2), # extract TeamID from MatchID
SeasonMatchNumber = as.numeric(SeasonMatchNumber) # ensure it can be compared numerically
) %>%
filter(SeasonMatchNumber > 13) # exclude first 13 matches
# Get the spread into the info for each team
Matches_final_Spread <- Matches_final_cleaned_CurrentStandings %>%
rename_with(~ paste0(.x, "_self")) %>% #renames every column so that you know which row refers to the self team of analysis
inner_join(
Matches_final_cleaned_CurrentStandings,
by = c("SeasonMatchNumber_self" = "SeasonMatchNumber") # join the data frame to itself, matching each game via SeasonMatchNumber (_self = team of interest; no suffix = opponent)
) %>%
filter(TeamID_self != TeamID) %>% # Make sure we’re not joining a row to itself
mutate(
GoalDiff = GoalsFor_self - GoalsAgainst_self,
Spread = CurrentStanding - CurrentStanding_self # positive = better ranked than opponent, negative = underdog
) %>%
select(
SeasonMatchNumber = SeasonMatchNumber_self,
MatchID = MatchID_self,
TeamID = TeamID_self,
GoalDiff,
Spread,
CurrentStanding = CurrentStanding_self,
OpponentTeamID = TeamID,
OpponentStanding = CurrentStanding
)
#Correcting data types
Touches_scaled_numeric <- Touches_scaled %>%
mutate(
SeasonMatchNumber = as.numeric(SeasonMatchNumber),
Team = as.character(Team) # just to ensure consistency
)
# Data frame creation for Underdog analysis
Underdog_Analysis <- Matches_final_Spread %>%
left_join(
Touches_scaled_numeric,
by = c("SeasonMatchNumber", "TeamID" = "Team")
)
############################ Observed Data Table Summary | Underdog Hypothesis ############################
spread_cutoff <- 7 #arbitrary spread number: I like 7 because it separates the table in half (1st rank team playing against bottom half of table)
# Categorize real match data into underdog/favored + touch level
Underdog_Observed_Summary <- Underdog_Analysis %>%
filter(!is.na(GoalDiff) & !is.na(ScaledTouch) & !is.na(Spread)) %>% # ensure clean data
mutate(
SpreadGroup = case_when(
Spread <= -spread_cutoff ~ "Major Underdog",
Spread > -spread_cutoff & Spread < 0 ~ "Mild Underdog",
Spread == 0 ~ "Even",
Spread > 0 & Spread < spread_cutoff ~ "Mild Favorite",
Spread >= spread_cutoff ~ "Major Favorite"
),
TouchGroup = case_when(
ScaledTouch >= 1 ~ "High Touch",
ScaledTouch <= -1 ~ "Low Touch",
TRUE ~ "Average Touch"
)
) %>%
group_by(SpreadGroup, TouchGroup) %>%
summarise(
MeanObservedGD = mean(GoalDiff, na.rm = TRUE),
n = n(),
.groups = "drop"
) %>%
arrange(SpreadGroup, TouchGroup)
#Bar chart for this data: I think it is easier to understand than the cool-looking 3D chart generated below.
# Set factor levels for order
spread_levels <- c("Major Underdog", "Mild Underdog", "Even", "Mild Favorite", "Major Favorite")
touch_levels <- c("Low Touch", "Average Touch", "High Touch")
# Make sure SpreadGroup and TouchGroup are ordered
Underdog_Observed_Summary <- Underdog_Observed_Summary %>%
mutate(
SpreadGroup = factor(SpreadGroup, levels = spread_levels),
TouchGroup = factor(TouchGroup, levels = touch_levels)
)
# Plot observed data
Barchart_Categorical_Data <- ggplot(Underdog_Observed_Summary, aes(x = SpreadGroup, y = MeanObservedGD, fill = TouchGroup)) +
geom_col(position = position_dodge(width = 0.8), width = 0.7) +
geom_text(
aes(label = paste0("n=", n)),
position = position_dodge(width = 0.8),
vjust = ifelse(Underdog_Observed_Summary$MeanObservedGD >= 0, -0.5, 1.2),
size = 3.5
) +
scale_fill_brewer(palette = "Blues") +
labs(
title = "Observed Goal Differential by Underdog/Favorite Status and Touch Level",
x = "Underdog/Favorite Status (Spread Group)",
y = "Mean Goal Differential",
fill = "Touch Level"
) +
theme_minimal() +
theme(legend.position = "bottom")
# Statistically compare goal differentials across spreadgroup and touchgroup
# Via a two-way ANOVA
# Make sure grouping variables are factors
Underdog_Observed_ANOVA <- Underdog_Analysis %>%
filter(!is.na(GoalDiff) & !is.na(ScaledTouch) & !is.na(Spread)) %>%
mutate(
SpreadGroup = case_when(
Spread <= -spread_cutoff ~ "Major Underdog",
Spread > -spread_cutoff & Spread < 0 ~ "Mild Underdog",
Spread == 0 ~ "Even",
Spread > 0 & Spread < spread_cutoff ~ "Mild Favorite",
Spread >= spread_cutoff ~ "Major Favorite"
),
TouchGroup = case_when(
ScaledTouch >= 1 ~ "High Touch",
ScaledTouch <= -1 ~ "Low Touch",
TRUE ~ "Average Touch"
),
SpreadGroup = factor(SpreadGroup, levels = spread_levels),
TouchGroup = factor(TouchGroup, levels = touch_levels)
)
# Run Two-Way ANOVA
anova_result <- aov(GoalDiff ~ SpreadGroup * TouchGroup, data = Underdog_Observed_ANOVA)
ANOVASUM <- summary(anova_result)
TukeyAnova <- TukeyHSD(anova_result)
#Summary Table of Means and SDs per Group
Underdog_Observed_ANOVA %>%
group_by(SpreadGroup, TouchGroup) %>%
summarise(
Mean_GD = mean(GoalDiff, na.rm = TRUE),
SD_GD = sd(GoalDiff, na.rm = TRUE),
n = n(),
.groups = "drop"
) %>%
arrange(SpreadGroup, TouchGroup)
## # A tibble: 12 × 5
## SpreadGroup TouchGroup Mean_GD SD_GD n
## <fct> <fct> <dbl> <dbl> <int>
## 1 Major Underdog Low Touch -1.88 1.89 8
## 2 Major Underdog Average Touch -1.08 1.41 25
## 3 Major Underdog High Touch -0.6 1.67 5
## 4 Mild Underdog Low Touch -1.35 1.06 17
## 5 Mild Underdog Average Touch -0.365 1.52 63
## 6 Mild Underdog High Touch 0.481 1.53 27
## 7 Mild Favorite Low Touch -0.714 1.70 7
## 8 Mild Favorite Average Touch 0.266 1.51 79
## 9 Mild Favorite High Touch 0.810 1.57 21
## 10 Major Favorite Low Touch 1.25 1.71 4
## 11 Major Favorite Average Touch 1.2 1.61 30
## 12 Major Favorite High Touch 1 1.41 4
#The ANOVA does not indicate that spread AND touch together (the interaction) affect goal differential
#It does say that underdogs are more likely to lose (duh) and that more touch is better (duh)
#Look at the GAM model below; it fits a smooth relationship between the three values
############################ GAM Model | Underdog Hypothesis ############################
#This is asking: if a team is more/less touchy than usual, and they are an underdog/favorite, how does this impact the goal differential?
# Fit GAM model to allow for nonlinear effects
# Note: changing k to 15 completely flattens the slope
gam_model <- gam(
GoalDiff ~ s(Spread, ScaledTouch, k = 100, bs = "tp"), # k sets the maximum flexibility of the smooth
data = Underdog_Analysis
)
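# Sketch (not part of the original analysis): since results are sensitive to k,
# mgcv's built-in diagnostics can help judge whether the basis dimension is adequate
summary(gam_model) # edf, smooth significance, deviance explained
gam.check(gam_model) # k-index and p-value; a low k-index suggests k is too small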
# Create grid for predictions
spread_seq <- seq(min(Underdog_Analysis$Spread, na.rm = TRUE),
max(Underdog_Analysis$Spread, na.rm = TRUE), length.out = 50)
touch_seq <- seq(min(Underdog_Analysis$ScaledTouch, na.rm = TRUE),
max(Underdog_Analysis$ScaledTouch, na.rm = TRUE), length.out = 50)
grid <- expand.grid(Spread = spread_seq, ScaledTouch = touch_seq)
grid$GoalDiff <- predict(gam_model, newdata = grid)
# Convert to matrix for surface
z_matrix <- matrix(grid$GoalDiff, nrow = length(spread_seq), ncol = length(touch_seq))
# 3D Plot: See below in next R chunk
#Note: Each dot represents one match outcome for a team, therefore 2 dots for each match
# plot_ly() %>%
# add_surface(
# x = ~spread_seq,
# y = ~touch_seq,
# z = ~z_matrix,
# colorscale = list(
# c(0, "red"), # red for losses
# c(1, "green") # green for wins
# ),
# cmin = min(Underdog_Analysis$GoalDiff, na.rm = TRUE),
# cmax = max(Underdog_Analysis$GoalDiff, na.rm = TRUE),
# opacity = 0.7,
# showscale = TRUE
# ) %>%
# add_markers(
# data = Underdog_Analysis,
# x = ~Spread,
# y = ~ScaledTouch,
# z = ~GoalDiff,
# marker = list(
# size = 3,
# color = ~GoalDiff,
# colorscale = list(c(0, "#ff0000"), c(1, "#00ff00")), # flipped: red = low, green = high
# cmin = min(Underdog_Analysis$GoalDiff, na.rm = TRUE),
# cmax = max(Underdog_Analysis$GoalDiff, na.rm = TRUE)
# ),
# name = "Observed"
# ) %>%
# layout(
# title = "Underdog Hypothesis: Spread x Touch Deviation x Goal Differential",
# scene = list(
# xaxis = list(title = "Spread (Opponent Rank - Team Rank)"),
# yaxis = list(title = "Scaled Touch Deviation"),
# zaxis = list(title = "Goal Differential")
# )
# )
############################ GAM Model CATEGORICAL Table Summary (Interpretation of GAM) | Underdog Hypothesis ############################
# Evaluates how a team's goal differential is predicted by the model across a spectrum of two predictors:
# Predictor One = Ranking Spread
# Predictor Two = Scaled Touch Deviation (how much more or less physical touch a team used compared to their norm)
# Mirrors the CATEGORICAL breakdown of the observed data table.
# Define a grid of Spread and ScaledTouch values
# Create a sequence of 100 evenly spaced values from smallest to largest spread
spread_vals <- seq(min(Underdog_Analysis$Spread, na.rm = TRUE),
                   max(Underdog_Analysis$Spread, na.rm = TRUE),
                   length.out = 100)
# Create a sequence of 100 evenly spaced values from smallest to largest touch
touch_vals <- seq(min(Underdog_Analysis$ScaledTouch, na.rm = TRUE),
                  max(Underdog_Analysis$ScaledTouch, na.rm = TRUE),
                  length.out = 100)
# Create a 10,000-row data frame by crossing the 100 spread values with the 100 touch values above
grid <- expand.grid(Spread = spread_vals, ScaledTouch = touch_vals)
# Predict GoalDiff for each synthetic grid point (10,000 in total)
grid$PredictedGoalDiff <- predict(gam_model, newdata = grid)
# Note: the choice of 7 as the spread cutoff is somewhat arbitrary
grid_summary <- grid %>%
  mutate(
    SpreadGroup = case_when(
      Spread <= -spread_cutoff ~ "Major Underdog",
      Spread > -spread_cutoff & Spread < 0 ~ "Mild Underdog",
      Spread == 0 ~ "Even",
      Spread > 0 & Spread < spread_cutoff ~ "Mild Favorite",
      Spread >= spread_cutoff ~ "Major Favorite"
    ),
    TouchGroup = case_when(
      ScaledTouch >= 1 ~ "High Touch",
      ScaledTouch <= -1 ~ "Low Touch",
      TRUE ~ "Average Touch"
    )
  ) %>%
  group_by(SpreadGroup, TouchGroup) %>%
  summarise(
    MeanPredGD = mean(PredictedGoalDiff, na.rm = TRUE),
    .groups = "drop"
  ) %>%
  arrange(SpreadGroup, TouchGroup)
# Bar graph comparing the GAM model's major/mild underdog predictions with the observed-data tables
# Set the same factor levels for consistency across plots
grid_summary <- grid_summary %>%
  mutate(
    SpreadGroup = factor(SpreadGroup, levels = spread_levels),
    TouchGroup = factor(TouchGroup, levels = touch_levels)
  )
# Plot GAM model data
GAM_Model_Data_Plot <- ggplot(grid_summary, aes(x = SpreadGroup, y = MeanPredGD, fill = TouchGroup)) +
  geom_col(position = position_dodge(width = 0.8), width = 0.7) +
  scale_fill_brewer(palette = "Blues") +
  labs(
    title = "GAM Model Predicted Goal Differential by Underdog/Favorite Status and Touch Level",
    x = "Underdog/Favorite Status (Spread Group)",
    y = "Predicted Goal Differential",
    fill = "Touch Level"
  ) +
  theme_minimal() +
  theme(legend.position = "bottom")
#Output for Hypothesis 3:
| SpreadGroup | TouchGroup | MeanObservedGD | n |
|---|---|---|---|
| Major Favorite | Average Touch | 1.20 | 30 |
| Major Favorite | High Touch | 1.00 | 4 |
| Major Favorite | Low Touch | 1.25 | 4 |
| Major Underdog | Average Touch | -1.08 | 25 |
| Major Underdog | High Touch | -0.60 | 5 |
| Major Underdog | Low Touch | -1.88 | 8 |
| Mild Favorite | Average Touch | 0.27 | 79 |
| Mild Favorite | High Touch | 0.81 | 21 |
| Mild Favorite | Low Touch | -0.71 | 7 |
| Mild Underdog | Average Touch | -0.37 | 63 |
| Mild Underdog | High Touch | 0.48 | 27 |
| Mild Underdog | Low Touch | -1.35 | 17 |
Note: this graph shows the true, observed data; it is not from a model.
For the above graph, a categorical approach was used:
SpreadGroup = Major Underdog, Mild Underdog, Even, Mild Favorite, Major Favorite
TouchGroup = High Touch (≥1 MAD), Low Touch (≤-1 MAD), Average Touch
In the observed match data, goal differentials were associated with both touch level and underdog/favorite status. Among major underdogs, higher-touch matches were linked to more favorable outcomes (Mean GD = -0.60 for high-touch vs. -1.88 for low-touch matches). Mild underdogs also showed a similar pattern, with high-touch matches associated with positive mean goal differentials (+0.48), compared to negative outcomes for low-touch matches (-1.35). Mild favorites likewise showed more positive outcomes with higher touch. In contrast, among major favorites, touch level appeared to have less association with outcome, as these teams generally won regardless of touch level. These associations suggest that prosocial touch behaviors may be more strongly linked to match outcomes when teams face greater competitive challenges (as underdogs).
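The touch groups above are defined in MAD units. As an illustrative sketch only (the project's actual ScaledTouch variable is computed in an earlier chunk, so the values and exact formula here are assumptions), a MAD-scaled deviation for one team's per-match touch counts could be computed like this:

```r
# Illustrative MAD scaling of hypothetical per-match touch counts.
# Note: R's mad() multiplies by 1.4826 by default; constant = 1 gives the raw MAD.
touch_counts <- c(24, 31, 18, 40, 27, 22, 35)
scaled_touch <- (touch_counts - median(touch_counts)) / mad(touch_counts, constant = 1)
round(scaled_touch, 2)
```

Matches at or above +1 on this scale would fall in "High Touch", those at or below -1 in "Low Touch", and the rest in "Average Touch".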
##
## Family: gaussian
## Link function: identity
##
## Formula:
## GoalDiff ~ s(Spread, ScaledTouch, k = 100, bs = "tp")
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.057e-11 8.691e-02 0 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(Spread,ScaledTouch) 2 2 43.67 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.228 Deviance explained = 23.3%
## GCV = 2.2134 Scale est. = 2.1905 n = 290
The Generalized Additive Model (GAM) used the following form:
\[ \text{GoalDiff} \sim s(\text{Spread}, \text{ScaledTouch}) \]
where \(s()\) is a smooth spline function allowing for nonlinear effects of Spread and ScaledTouch on GoalDiff.
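A minimal, self-contained sketch of this fit, using mgcv and simulated data standing in for `Underdog_Analysis` (which is built in an earlier chunk); the simulated effect sizes are arbitrary:

```r
# Sketch of the GAM fit with mgcv, on simulated stand-in data
library(mgcv)

set.seed(1)
sim <- data.frame(
  Spread      = runif(290, -10, 10),  # opponent rank minus team rank
  ScaledTouch = rnorm(290)            # deviation from the team's touch norm
)
sim$GoalDiff <- 0.15 * sim$Spread + 0.3 * sim$ScaledTouch + rnorm(290)

# Joint thin-plate smooth over both predictors, matching the formula above
gam_fit <- gam(GoalDiff ~ s(Spread, ScaledTouch, k = 100, bs = "tp"), data = sim)
summary(gam_fit)$s.table  # edf, F, and p-value of the smooth term
```

The joint smooth `s(Spread, ScaledTouch)` lets the effect of touch deviation vary with spread, which is exactly what the underdog hypothesis requires.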
The GAM model revealed a significant nonlinear association between Spread, ScaledTouch, and Goal Differential (smooth term: edf = 2.0, F = 43.67, p < 0.001). The model explained approximately 23% of the variance in goal differential (adjusted R² = 0.23), supporting the hypothesis that pro-social touch behaviors are linked to match outcomes, particularly in relation to underdog status.
The data suggest that when teams are underdogs, using more prosocial touch is linked to better match outcomes — hinting that these behaviors might help teams maintain cohesion and resilience against stronger opponents.
I then created another bar chart, this time built from the model rather than from observed values; it shows the model's predicted average outcome (GoalDiff) for each category.
grid_summary <- grid %>%
  mutate(
    SpreadGroup = case_when(
      Spread <= -spread_cutoff ~ "Major Underdog",
      Spread > -spread_cutoff & Spread < 0 ~ "Mild Underdog",
      Spread == 0 ~ "Even",
      Spread > 0 & Spread < spread_cutoff ~ "Mild Favorite",
      Spread >= spread_cutoff ~ "Major Favorite"
    ),
    TouchGroup = case_when(
      ScaledTouch >= 1 ~ "High Touch",
      ScaledTouch <= -1 ~ "Low Touch",
      TRUE ~ "Average Touch"
    )
  ) %>%
  group_by(SpreadGroup, TouchGroup) %>%
  summarise(
    MeanPredGD = mean(PredictedGoalDiff, na.rm = TRUE),
    .groups = "drop"
  ) %>%
  arrange(SpreadGroup, TouchGroup)
GAM_Model_Data_Plot
knitr::kable(grid_summary, digits = 2, caption = "GAM Model Predicted Goal Differential by Spread and Touch Level")
| SpreadGroup | TouchGroup | MeanPredGD |
|---|---|---|
| Major Favorite | Average Touch | 1.12 |
| Major Favorite | High Touch | 2.73 |
| Major Favorite | Low Touch | 0.46 |
| Major Underdog | Average Touch | -1.22 |
| Major Underdog | High Touch | 0.39 |
| Major Underdog | Low Touch | -1.88 |
| Mild Favorite | Average Touch | 0.36 |
| Mild Favorite | High Touch | 1.97 |
| Mild Favorite | Low Touch | -0.30 |
| Mild Underdog | Average Touch | -0.46 |
| Mild Underdog | High Touch | 1.15 |
| Mild Underdog | Low Touch | -1.12 |
The GAM-based summary plot highlights that prosocial touch behaviors are most strongly linked to improved outcomes for underdog teams. High-touch underdogs were predicted to perform markedly better than low-touch underdogs, whereas for favored teams, touch level had less impact on predicted outcomes.
The observed and modeled results both indicated that match outcomes were associated with varying levels of prosocial touch behavior, particularly in relation to competitive context. In both the raw data and the GAM model, mild underdogs showed the largest positive association between higher touch levels and goal differential. The GAM model further suggested stronger nonlinear associations for mild favorites, with predicted goal differentials increasing with higher touch. Among major favorites, both observed and modeled results showed relatively consistent outcomes across touch levels, though the model indicated a possible association between higher touch and larger predicted margins. These results suggest that prosocial touch frequency may serve as a useful indicator of match dynamics, particularly when teams face moderate competitive adversity.
##
## Call:
## lm(formula = Rank ~ Reciprocal_To_NonRecip_Ratio, data = Reciprocal_vs_Rank)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.8756 -2.5819 0.0691 2.4197 4.3007
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 23.517 5.341 4.403 0.00086 ***
## Reciprocal_To_NonRecip_Ratio -18.308 6.022 -3.040 0.01027 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.273 on 12 degrees of freedom
## Multiple R-squared: 0.4351, Adjusted R-squared: 0.388
## F-statistic: 9.242 on 1 and 12 DF, p-value: 0.01027
Hypothesis 4a examined whether the ratio of reciprocal to non-reciprocal touch events was associated with final season standings. The linear model showed a significant association (β = -18.31, p = 0.01, R² = 0.44): because lower Rank values indicate better finishes, the negative coefficient means that teams with higher reciprocal-to-non-reciprocal touch ratios tended to finish higher in the standings, supporting the hypothesis that reciprocal interactions may reflect stronger team cohesion.
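A minimal sketch of this model, fit on simulated stand-in data (the real `Reciprocal_vs_Rank` has one row per team for the 14 NWSL teams; the values below are invented for illustration):

```r
# Sketch of the Hypothesis 4a linear model on simulated stand-in data
set.seed(2)
ratio <- runif(14, 0.5, 1.2)                   # reciprocal / non-reciprocal touch ratio per team
sim_rank <- data.frame(
  Rank = rank(-ratio + rnorm(14, sd = 0.15)),  # lower Rank = better finish
  Reciprocal_To_NonRecip_Ratio = ratio
)
fit <- lm(Rank ~ Reciprocal_To_NonRecip_Ratio, data = sim_rank)
coef(summary(fit))  # a negative slope pairs higher ratios with better (lower) Rank
```

With only 14 observations, the fit rests on few degrees of freedom, which is why the confidence in the slope estimate matters as much as its sign.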
TouchiestPlayers_Rank
CV_TouchConcentraion_Rank
CV_Stats
##
## Pearson's product-moment correlation
##
## data: team_cv_rank$CV and team_cv_rank$Rank
## t = -0.88655, df = 12, p-value = 0.3927
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.6880078 0.3254580
## sample estimates:
## cor
## -0.2479353
Concentration_top3
Concentration_top3_Stats
##
## Pearson's product-moment correlation
##
## data: touch_concentration$Top3_Proportion and touch_concentration$Rank
## t = -1.4207, df = 12, p-value = 0.1809
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.7575206 0.1892233
## sample estimates:
## cor
## -0.3794525
Hypothesis 4b evaluated whether a more even distribution of touch behaviors across players was linked to team success. Measures of touch concentration (Coefficient of Variation, Gini coefficient, and proportion of touches by top 3 players) were not significantly associated with final rank. This suggests that while reciprocal interactions may be indicative of team cohesion, overall equality of touch distribution across players does not appear to predict season outcomes.
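For reference, the three concentration measures from Hypothesis 4b can be computed as follows on a hypothetical vector of per-player touch counts for one team (the actual per-team computation lives in an earlier chunk):

```r
# Touch-concentration measures on hypothetical per-player counts for one team
touches <- c(40, 35, 30, 12, 10, 8, 5, 4, 3, 2, 1)

# Coefficient of variation: spread of touches relative to the mean
cv <- sd(touches) / mean(touches)

# Gini coefficient via mean absolute difference (0 = perfectly even, 1 = maximal concentration)
n <- length(touches)
gini <- sum(abs(outer(touches, touches, "-"))) / (2 * n^2 * mean(touches))

# Share of touches accounted for by the three touchiest players
top3_prop <- sum(sort(touches, decreasing = TRUE)[1:3]) / sum(touches)

round(c(CV = cv, Gini = gini, Top3_Proportion = top3_prop), 3)
```

All three index the same underlying idea (how unevenly touch is distributed across a roster), so their non-significant associations with rank are mutually consistent.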
To assess inter-rater reliability, we computed an Intraclass Correlation Coefficient (ICC) using a two-way random-effects model (absolute agreement, single rater). The ICC quantifies agreement in the number of prosocial touches recorded by the three raters across the same set of 12 matches; the resulting value estimates the reliability of the coding protocol for touch frequency.
The observed ICC of 0.79 indicates that raters showed good consistency in coding prosocial touch frequency, providing confidence in the reliability of the dataset used for further analyses.
#Inter-Rater Analysis
#Goal is to determine how "similar" raters were
#We have a total of 12 matches that all raters watched
library(tidyverse)
library(data.table)
library(broom)
library(janitor)
library(readxl)
library(rmarkdown)
library(readr)
library(dplyr)
library(irr)
library(fuzzyjoin)
library(knitr)
source("Data_Management.R") #Runs and brings in Matches_final from Data_Management.R script
#Dataframe "Touches_interrater" contains all the touches recorded for this analysis
# Situations excluded from the core analyses are also excluded here: Goals For/Against, Substitutions
Exclude_Touch <- c("TA", "CO", "NEG")
Exclude_Situation <- c("GF", "GA", "SUB")
Interrater <- Touches_interrater %>%  # duplicate and filter
  filter(!(HapticRitual %in% Exclude_Touch)) %>%
  filter(!(Situation %in% Exclude_Situation))
############################ Simple Frequency Check ############################
#Count check per match (simply how many each rater saw)
touch_counts <- Interrater %>%
  group_by(SeasonMatchNumber, Rater) %>%
  summarise(TouchCount = n(), .groups = "drop")
# Plot frequency per match per rater
interrater_plot <- ggplot(touch_counts, aes(x = factor(SeasonMatchNumber), y = TouchCount, fill = Rater)) +
  geom_bar(stat = "identity", position = "dodge") +
  labs(title = "Touch Count per Match by Rater",
       x = "Season Match Number",
       y = "Touch Count") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))
### ICC - Intraclass Correlation Coefficient - interval data:
# Reshape the data: rows = matches, columns = raters
icc_data <- Interrater %>%
  group_by(SeasonMatchNumber, Rater) %>%
  summarise(Count = n(), .groups = "drop") %>%
  pivot_wider(names_from = Rater, values_from = Count)
# Apply ICC (two-way random effects model)
icc_result <- icc(icc_data[,-1], model = "twoway", type = "agreement", unit = "single")
## Single Score Intraclass Correlation
##
## Model: twoway
## Type : agreement
##
## Subjects = 12
## Raters = 3
## ICC(A,1) = 0.788
##
## F-Test, H0: r0 = 0 ; H1: r0 > 0
## F(11,23.9) = 11.9 , p = 3.33e-07
##
## 95%-Confidence Interval for ICC Population Values:
## 0.551 < ICC < 0.926
Quick Summary of Data/Sample Sizes:
Note that the filtered touch count is what was used in most of the analyses above. The reduction from 14,869 recorded touch instances to 7,955 reflects the exclusion of substitutions, goal celebrations, and situations such as collisions, which were not defined as pro-social touches.
# Data Summary and Overview
# Gives Simple overall counts, tables, data make-up etc
library(tidyverse)
library(data.table)
library(broom)
library(janitor)
library(readxl)
library(rmarkdown)
library(readr)
library(dplyr)
############################ Complete data summary | No filtering ############################
#Create Tables that summarizes complete data of "Touches_final" and "Matches_final"
Touches_Summary <- Touches_final
Matches_Summary <- Matches_final
Total_Touch_Instance_count <- nrow(Touches_Summary) #count total number of touches recorded
Total_Match_Count <- n_distinct(Touches_Summary$SeasonMatchNumber) #total number of matches watched
Total_Teams <- n_distinct(Touches_Summary$Team) #teams recorded
Total_Matches_perTeam <- 26 #Matches each team played (verified below)
Summary_Table_A <- tibble(
  Variable = c("Total_Touch_Instance_count", "Total_Match_Count", "Total_Teams", "Total_Matches_perTeam"),
  Value = c(
    nrow(Touches_Summary),
    n_distinct(Touches_Summary$SeasonMatchNumber),
    n_distinct(Touches_Summary$Team),
    26  # Manually verified
  )
)
# Standardize team-name variants before counting matches per team
Matches_Summary <- Matches_Summary %>%
  mutate(TeamName = case_when(
    TeamName %in% c("Racing Louisville", "Racing louisville FC", "Louisville Racing") ~ "Racing Louisville FC",
    TeamName %in% c("NC Courage", "Carolina Courage") ~ "North Carolina Courage",
    TeamName %in% c("KC Current") ~ "Kansas City Current",
    TeamName %in% c("Chicago Redstar FC") ~ "Chicago Red Stars",
    TeamName %in% c("Portland Thorns") ~ "Portland Thorns FC",
    TeamName %in% c("Gotham FC") ~ "NJ/NY Gotham FC",
    TeamName %in% c("San Diego Wave") ~ "San Diego Wave FC",
    TeamName %in% c("Seattle Reign") ~ "Seattle Reign FC",
    TRUE ~ TeamName
  ))
TeamMatchCounts <- Matches_Summary %>%
  count(TeamName, name = "MatchesPlayed")
TeamMatchCounts <- TeamMatchCounts %>%
  left_join(Team_IDs,
            by = c("TeamName" = "Team Name 2024 Season"))
ByTeam_Total_Touch_Instance_count <- Touches_Summary %>%
  count(Team, name = "Total Season Touches") %>%
  rename(TeamID = Team)
TeamMatchCounts <- TeamMatchCounts %>%
  left_join(ByTeam_Total_Touch_Instance_count,
            by = c("TeamID" = "TeamID"))
############################ Core Hyp data summary | Includes filtering ############################
Filtered_Touch_Summary <- Touches_CoreHyp
Filtered_Touch_Instance_count <- nrow(Filtered_Touch_Summary)
#Count of frequency of touches per team
Filtered_Touches_by_team <- Touches_by_team
Filtered_TeamMatchCounts <- TeamMatchCounts %>%
  left_join(Filtered_Touches_by_team,
            by = c("TeamID" = "Team"))
Summary_Table_B <- tibble(
  Variable = c("Total_Touch_Instance_count", "Filtered_Touch_Instance_count", "Total_Match_Count", "Total_Teams", "Total_Matches_perTeam"),
  Value = c(
    Total_Touch_Instance_count,
    Filtered_Touch_Instance_count,
    Total_Match_Count,
    Total_Teams,
    Total_Matches_perTeam
  )
)
knitr::kable(
  Summary_Table_B,
  digits = 0,
  caption = "Summary of Data and Sample Sizes"
)
| Variable | Value |
|---|---|
| Total_Touch_Instance_count | 14869 |
| Filtered_Touch_Instance_count | 7955 |
| Total_Match_Count | 182 |
| Total_Teams | 14 |
| Total_Matches_perTeam | 26 |